This is a very brief, and incomplete, introduction to floating point numbers as
commonly used in modern computers. For more complete information, should you be
interested, refer to

   The IEEE standard for floating point arithmetic.

   What every computer scientist should know about floating-point arithmetic.

which are easily found via Google.

It is common for numerical values to be expressed using a so-called  normalised 
scientific notation, e.g.

      -123.456  => -0.123456e+3
      0.0123456 => +0.123456e-1

where e+3 means times 10 to the power of 3, i.e. 1000, and e-1 means times 10
to the power of minus 1, i.e. 0.1.  Such numbers can be regarded as
expressions of the form

      V = s * F * 10**E

where

    ** is the exponentiation operator
    V is the value of the number, which has no restrictions
    s is the sign which may only be +1 or -1
    F is a fraction in the range 0.1 <= F < 1.0
    E is the exponent which is a signed integer value

Note, restricting the range of F to be between 0.1 and 1.0 means that there  is
only one way to express a number in this notation. All unnormalised expressions
such as 12.34 can be reduced to their normalised  form, in this case 0.1234e+2.
However,  this form of normalisation requires that zero is a special case where
F=0.0 is allowed despite it being outside of the normal valid range for F.
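
As an illustration, this normalisation rule can be sketched in Python (the
function name and the use of log10 are illustrative choices, not part of any
standard):

```python
import math

def normalise(v):
    """Split v into (s, F, E) such that v == s * F * 10**E with
    0.1 <= F < 1.0, treating zero as the special case F = 0.0."""
    if v == 0.0:
        return 1, 0.0, 0
    s = 1 if v > 0.0 else -1
    a = abs(v)
    # One more than floor(log10(a)) puts a / 10**E into [0.1, 1.0)
    E = math.floor(math.log10(a)) + 1
    F = a / 10.0**E
    # log10 is itself rounded, so nudge F back into range if needed
    if F >= 1.0:
        F /= 10.0
        E += 1
    elif F < 0.1:
        F *= 10.0
        E -= 1
    return s, F, E
```

For example normalise(-123.456) gives (-1, 0.123456, 3), matching the
-0.123456e+3 example above (up to the usual floating point rounding of F).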

Floating point numbers are most easily understood as a variation on this way of
expressing numerical values, the principal differences being

    The base is 2 rather than 10
    The fraction, F, is limited to the range 0.5 <= F < 1.0
    The precision of F is limited to a fixed number of bits
    The range of exponents, E, is limited to a fixed number of bits

In passing it may be observed that bases other than 2 have been used, but these
are unlikely to be encountered nowadays - other than on IBM mainframes.

Unlike the real numbers which have an infinite range and are  continuous,  i.e.
there are an infinite number of them in any finite interval, the floating point
numbers used to approximate them have both a finite range and precision so that
there are only a finite number of floating point values.

Using the IEEE format for floating point numbers which is the only one that you
are likely to encounter on modern hardware, we have

   V = s * f * 2**e

where

   s, the sign, is a one bit field
   e, the exponent, is an 8 bit field (11 bits for double precision)
   f, the fraction, is a 23 bit field (52 bits for double precision)

which, for single precision floating point, could be assigned to fields in a 32
bit word as follows

   seeeeeeeefffffffffffffffffffffff
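
A sketch, using only the Python standard library, of how these three fields
can be pulled out of such a 32 bit pattern (Python's own floats are double
precision, so packing with '>f' first rounds the value to single precision):

```python
import struct

def float_fields(x):
    """Return the (s, e, f) bit fields of x as a single precision
    float: 1 bit sign, 8 bit biased exponent, 23 bit fraction."""
    # pack('>f', ...) rounds to single precision and gives the 4 raw
    # bytes, which unpack('>I', ...) reads back as an unsigned integer
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    s = bits >> 31
    e = (bits >> 23) & 0xFF
    f = bits & 0x7FFFFF
    return s, e, f
```

For example float_fields(1.0) gives (0, 127, 0): sign bit clear, biased
exponent 127 and an all-zero fraction field.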

In the case of the f field a natural way to represent it would be such that
   
   10000000000000000000000 represents 0.5
   11000000000000000000000 represents 0.75
   10100000000000000000000 represents 0.625

etc., i.e. with a binary point (like a decimal point) to the left of the field.
As with the normalised decimal scientific notation, limiting the range of
f to be greater than or equal to 0.5 and less than 1.0 means that there is only
one way to represent any floating point number using this format, but a special
unnormalised representation of all zeroes is required for 0.0.

However, for IEEE floating point, there are some modifications to the above

   1 - the exponent, e, which is a signed integer is represented
       by an unsigned integer by adding a bias of 127 for single
       precision numbers or 1023 for double precision numbers.

   2 - because the most significant bit of f is always 1 it need
       not be explicitly present in the bit pattern used for the
       floating point numbers.  The f field is therefore shifted
       left by one bit, making the most significant bit implicit
       (it is always 1) and providing space for an extra bit for
       the least significant bit of f.
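
Putting the bias and the implicit bit together, a normal single precision
value can be reconstructed from its fields as sketched below.  The significand
is written here as 1.f in [1.0, 2.0), which is the same number as 0.1f in
[0.5, 1.0) with the exponent one larger; the arithmetic is identical either
way:

```python
import struct

def decode_single(x):
    """Recover the value of a normal single precision number
    from its sign, biased exponent and fraction bit fields."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    s = bits >> 31
    E = (bits >> 23) & 0xFF       # biased exponent, 1..254 for normal numbers
    f = bits & 0x7FFFFF           # the 23 explicit fraction bits
    assert 0 < E < 255, "zeros, denorms, infs and NaNs need special handling"
    significand = 1.0 + f / 2.0**23     # restore the implicit leading 1
    return (-1.0)**s * significand * 2.0**(E - 127)
```

The assert marks exactly the two exponent values that the modifications
below treat as special cases.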

For IEEE floating point there are two further modifications to this scheme

   3 - The smallest exponent,  e=-127 or 0 after biasing,  is  a
       special case indicating that f is  not  normalised,  i.e.
       f is not in the range 0.5 to 1.0 but is smaller than 0.5.
       These numbers do not have an implicit leading 1  bit  and
       are referred to as denormalised numbers, or  denorms  for
       short.

       If both e and f are 0 the value of the number represented
       is, of course, 0.0 - but note that the s bit may be 0  or 
       1 giving us both +0.0 and -0.0 as distinct bit patterns.

       Because handling denorms other than 0.0 can be slow  many
       implementations treat all denorms as zero.  This has been
       particularly common with GPU hardware,  though newer GPUs
       can now often handle them fully.

   4 - The largest exponent, e=128 or 255 after biasing, is also
       treated as a special case.  If  the f field is itself all
       zeros then the number is treated as being infinite and is
       referred to as +inf or -inf depending on the value of the
       s bit. Any other values for f represent an error known as
       a NaN (standing for Not A Number).  NaNs are generated by
       operations,  such as 0.0/0.0,  for which there is no well
       defined mathematical result.  The  IEEE standard actually
       defines two types of NaN,  quiet NaNs and signaling NaNs,
       but we need not go into that here.
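
The special cases in 3 and 4 can be seen directly in the bit patterns.  A
small sketch (again packing Python's double precision values down to single
precision first):

```python
import math
import struct

def bits_of(x):
    """The 32 bit pattern of x as a single precision float."""
    return struct.unpack('>I', struct.pack('>f', x))[0]

assert bits_of(+0.0) == 0x00000000       # all bits zero
assert bits_of(-0.0) == 0x80000000       # only the sign bit set
assert bits_of(math.inf) == 0x7F800000   # e all ones, f all zeros
assert bits_of(-math.inf) == 0xFF800000
assert bits_of(2.0**-149) == 0x00000001  # smallest denorm: e zero, f non-zero
nan_bits = bits_of(math.nan)             # NaN: e all ones, f non-zero
assert (nan_bits >> 23) & 0xFF == 255 and nan_bits & 0x7FFFFF != 0
```

The NaN check deliberately looks only at the fields, since the exact NaN
bit pattern produced can vary from one platform to another.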

Some consequences of the above

   A - Not all real numbers can be represented  by  floating  point
       numbers and have to be rounded up or down to  their  nearest
       representable value.  This  includes  such simple numbers as
       one third.  As a result expressions such as X * (1.0/X) will not
       always evaluate to exactly 1 and floating point calculations
       should always be regarded as approximations - unless you are
       an expert, which very few people are, and I am certainly not
       one of them.

   B - The meaning of the least significant bit of a floating point
       number (as with any other bit in the fraction)  varies  with
       magnitude of the number.  The  relative  precision is almost
       constant, but absolute precision decreases with magnitude.

   C - "Small" integers, up to 23 bits in size,  can be represented
       exactly as can small negative powers of 2 such as  0.5, 0.25
       etc. and their sums such as 0.75, and as long as you are careful
       to only use such numbers you can do exact calculations using
       floating point. But you really do need to be careful.

   D - Above a value of 2**24 all floating point values are integer
       values, above 2**25 all are even integers etc.

   E - Adding a small number to a much larger number or subtracting 
       a small number from a much larger number may have no effect.
       In particular adding 1.0 to a  very  large  number  may  not
       change its value.

   F - Any calculation involving a NaN gives a NaN as a result  and
       any function that cannot evaluate a result,  e.g. because an
       argument is out of range, may return a NaN. However, not all
       functions that are passed a NaN will respect this  rule  and
       may return what is often a spurious value - some of the GLSL
       standard functions are like this,  though they may vary from
       one implementation to another (for example,  on my laptop
       pow(a,b) is evaluated as pow(abs(a),b) for negative a when
       using the Intel integrated graphics processor, but returns
       a NaN when the NVIDIA GPU is used).  An oddity of the IEEE
       floating point standard is that it mandates pow(0.0,0.0)
       should return 1.0 despite it being mathematically undefined.

   G - There is a maximum finite floating point  number.  Adding  a
       positive number to it may either have no effect or may cause
       the value to overflow and become infinite depending on   the
       size of the value being added.

   H - Any comparison with a NaN yields the result "unordered".  As
       a result all of the following should evaluate to false

           A < NaN,  A <= NaN,  A == NaN,  A >= NaN,  A > NaN
           NaN < A,  NaN <= A,  NaN == A,  NaN >= A,  NaN > A

       this is also true if A is itself a NaN, so A == A should not
       evaluate to true if A is a NaN,  which  can be a surprise if
       you are not aware of the possibility.

   I - Although there are two zeros, +0.0 and -0.0, they compare as
       being equal.  You have to be more careful if there is a need
       to distinguish between them.

   J - 1.0 / +inf => +0.0 and 1.0 / -inf => -0.0,  and conversely
       1.0 / +0.0 => +inf while 1.0 / -0.0 => -inf.  Many  people
       regard division by zero yielding an infinity rather than an
       error as a mistake, but it is mandated by the IEEE standard.
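
Most of these consequences can be demonstrated directly.  Here is a sketch in
Python, whose floats are IEEE double precision, so the exact-integer threshold
is 2**53 rather than 2**24; note also that in Python 1.0/0.0 raises
ZeroDivisionError instead of returning +inf, so that half of J cannot be shown:

```python
import math
import sys

# A: rounding - neither 0.1 nor 0.2 nor 0.3 is exactly representable
assert 0.1 + 0.2 != 0.3
# C: small negative powers of 2 and their sums are exact
assert 0.5 + 0.25 == 0.75
# D: above 2**53 (the double precision threshold) all values are integers
assert 2.0**53 + 1.0 == 2.0**53
# E: adding a small number to a much larger one may have no effect
assert 1.0e20 + 1.0 == 1.0e20
# F: NaNs propagate through calculations
nan = float('nan')
assert math.isnan(nan + 1.0)
# G: overflowing past the largest finite value gives infinity
assert sys.float_info.max + sys.float_info.max == math.inf
# H: NaN comparisons are unordered, even NaN == NaN is false
assert not (nan == nan) and nan != nan
# I: the two zeros compare equal but can be told apart via the sign
assert 0.0 == -0.0
assert math.copysign(1.0, -0.0) == -1.0
# J: dividing by an infinity gives a signed zero
assert 1.0 / math.inf == 0.0
assert math.copysign(1.0, 1.0 / -math.inf) == -1.0
```

If all of the assertions pass silently, the behaviour described above holds
on your platform.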
 
For most simple calculations you can forget about most of this other than  the
rule of thumb that floating point calculations are approximations.   But when-
ever very large or very small numbers are involved, when  extreme accuracy  is
required, or when long chains of calculations are used,  it is useful to    be
aware of the niceties of floating point as it may help you avoid some  of  the
pitfalls, or at least to recognise some of the artefacts that may arise   from
its use.

The Complex Functions shader used with the experiments with Complex   Functions
has, for the most part, been written in a naive manner for  simplicity   and
speed of execution.  As a result it does exhibit a few artefacts that  might
have been avoided with more careful coding, but only at great expense   with
regard to execution speed (e.g. by using multi-precision arithmetic),    and
such approaches are not practical for the intended use of the shader.

As an example,  the  large black areas due to NaNs that are seen in some of the
images are in most cases almost certainly due to evaluating 0.0/0.0 where  the
zeroes are themselves the result of rounding down very small values.  If a high
precision arithmetic package were used the rounding down would not occur and  a
proper value would be obtained for the point. However this is not practical for
the use of the shader in VGHD both in terms of effort and computational speed.